Tripartite Organization of the Ventral Stream by Animacy and Object Size
Occipito-temporal cortex is known to house visual object representations, but the organization of the neural activation patterns along this cortex is still being discovered. Here we found a systematic, large-scale structure in the neural responses related to the interaction between two major cognitive dimensions of object representation: animacy and real-world size. Neural responses were measured with functional magnetic resonance imaging while human observers viewed images of big and small animals and big and small objects. We found that real-world size drives differential responses only in the object domain, not the animate domain, yielding a tripartite distinction in the space of object representation. Specifically, cortical zones with distinct response preferences for big objects, all animals, and small objects are arranged in a spoked organization around the occipital pole, along a single ventromedial-to-lateral-to-dorsomedial axis. The preference zones are duplicated on the ventral and lateral surfaces of the brain. Such a duplication indicates that an as-yet-unknown higher-order division of labor separates object processing into two substreams of the ventral visual pathway. Broadly, we suggest that these large-scale neural divisions reflect the major joints in the representational structure of objects and thus place informative constraints on the nature of the underlying cognitive architecture.
The role of real-world size in object representation
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 117-128). Every object in the world has a physical size which is intrinsic to how we interact with it: we pick up small objects like coins with our fingers, we throw footballs and swing tennis rackets, we orient our bodies to bigger objects like chairs and tables, and we navigate with respect to landmarks like fountains and buildings. Here I argue that the size of objects in the world is a basic property of object representation, with both behavioral and neural consequences. Specifically, I suggest that objects have a canonical visual size based on their real-world size (Chapter 2), and that we automatically access real-world size information when we recognize an object (Chapter 3). Further, I present evidence that there are neural consequences of real-world size for the large-scale organization of object knowledge in ventral visual cortex (Chapter 4). Specifically, there are regions with differential selectivity for big and small objects that span the dorsal and lateral surfaces of occipito-temporal cortex in a mirrored organization. Finally, I suggest that the empirical findings can be coherently explained by considering the experience of an observer situated in a three-dimensional world. This work provides testable predictions about retinal size biases in visual experience, and an approach for understanding the neural representation of any object in the world. by Talia Konkle. Ph.D.
Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness
A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear [1, 2]. These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low [1]. However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item [3]. In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real-world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).
Real-World Objects Are Not Represented as Bound Units: Independent Forgetting of Different Object Details from Visual Memory
Are real-world objects represented as bound units? Although a great deal of research has examined binding between the feature dimensions of simple shapes, little work has examined whether the featural properties of real-world objects are stored in a single unitary object representation. In a first experiment, we found that information about an object's color is forgotten more rapidly than information about an object's state (e.g., open, closed), suggesting that observers do not forget objects as entirely bound units. In a second and third experiment, we examined whether state and exemplar information are forgotten separately or together. If these properties are forgotten separately, the probability of getting one feature correct should be independent of whether the other feature was correct. We found that after a short delay, observers frequently remember both state and exemplar information about the same objects, but after a longer delay, memory for the two properties becomes independent. This indicates that information about object state and exemplar is forgotten separately over time. We thus conclude that real-world objects are not represented in a single unitary representation in visual memory.
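The independence logic used in the second and third experiments can be illustrated with a small contingency check: if two features are forgotten separately, the observed rate of remembering both should match the product of the individual rates. A minimal sketch on hypothetical response data (the function name and data are ours, not the authors'):

```python
import numpy as np

def independence_check(state_correct, exemplar_correct):
    """Compare the observed rate of remembering both features with the
    rate predicted if the two features are forgotten independently:
    P(both) = P(state) * P(exemplar)."""
    s = np.asarray(state_correct, dtype=float)
    e = np.asarray(exemplar_correct, dtype=float)
    p_state, p_exemplar = s.mean(), e.mean()
    p_both_observed = (s * e).mean()
    p_both_independent = p_state * p_exemplar
    return p_both_observed, p_both_independent

# Hypothetical data: 1 = feature reported correctly, 0 = incorrect.
# Perfectly coupled memory: both features always succeed or fail together.
state    = [1, 1, 1, 0, 0, 1, 1, 0]
exemplar = [1, 1, 1, 0, 0, 1, 1, 0]
obs, ind = independence_check(state, exemplar)
# obs (0.625) exceeds ind (0.625 * 0.625 = 0.390625):
# the features behave like a bound unit rather than independent ones.
```

When the observed joint rate instead matches the product of the marginals, the features are forgotten independently — the pattern the paper reports at the longer delay.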
Visual Awareness Is Limited by the Representational Architecture of the Visual System
Visual perception and awareness have strict limitations. We suggest that one source of these limitations is the representational architecture of the visual system. Under this view, the extent to which items activate the same neural channels constrains the amount of information that can be processed by the visual system and ultimately reach awareness. Here, we measured how well stimuli from different categories (e.g., faces and cars) blocked one another from reaching awareness using two distinct paradigms that render stimuli invisible: visual masking and continuous flash suppression. Next, we used fMRI to measure the similarity of the neural responses elicited by these categories across the entire visual hierarchy. Overall, we found strong brain–behavior correlations within the ventral pathway, weaker correlations in the dorsal pathway, and no correlations in early visual cortex (V1–V3). These results suggest that the organization of higher level visual cortex constrains visual awareness and the overall processing capacity of visual cognition. Funding: National Science Foundation (U.S.) Graduate Research Fellowship; National Institutes of Health (U.S.) Ruth L. Kirschstein National Research Service Award (F32EY024483).
Visual Long-Term Memory Has the Same Limit on Fidelity as Visual Working Memory
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object’s color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
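The two parameters described here — fidelity and the probability of forgetting — are commonly recovered by fitting a mixture of a von Mises (circular normal) noise component and a uniform guessing component to the distribution of recall errors. A minimal grid-search sketch, assuming errors are coded in radians; the function names are illustrative, not the paper's code:

```python
import numpy as np

def mixture_loglik(errors, kappa, g):
    """Log-likelihood of circular recall errors (radians) under a
    two-component mixture: with probability 1 - g the color was
    remembered with von Mises noise of concentration kappa (fidelity),
    and with probability g it was forgotten, giving a uniform guess."""
    vm = np.exp(kappa * np.cos(errors)) / (2.0 * np.pi * np.i0(kappa))
    return np.sum(np.log((1.0 - g) * vm + g / (2.0 * np.pi)))

def fit_mixture(errors):
    """Crude grid-search maximum-likelihood estimate of (kappa, g)."""
    grid = ((mixture_loglik(errors, k, g), k, g)
            for k in np.linspace(0.5, 50.0, 100)
            for g in np.linspace(0.0, 0.95, 96))
    _, kappa_hat, g_hat = max(grid)
    return kappa_hat, g_hat
```

On simulated data with a known guess rate and noise level, the grid search recovers both parameters approximately; in practice a proper optimizer (e.g., expectation–maximization) replaces the grid.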
Sensitive Period for a Multimodal Response in Human Visual Motion Area
The middle temporal complex (MT/MST) is a brain region specialized for the perception of motion in the visual modality [1-4]. However, this specialization is modified by visual experience: after long-standing blindness, MT/MST responds to sound [5]. Recent evidence also suggests that the auditory response of MT/MST is selective for motion [6, 7]. The developmental time course of this plasticity is not known. To test for a sensitive period in MT/MST development, we used fMRI to compare MT/MST function in congenitally blind, late-blind, and sighted adults. MT/MST responded to sound in congenitally blind adults, but not in late-blind or sighted adults, and not in an individual who lost his vision between the ages of 2 and 3 years. All blind adults had reduced functional connectivity between MT/MST and other visual regions. Functional connectivity was increased between MT/MST and lateral prefrontal areas in congenitally blind relative to sighted and late-blind adults. These data suggest that early blindness affects the function of feedback projections from prefrontal cortex to MT/MST. We conclude that there is a sensitive period for visual specialization in MT/MST. During typical development, early visual experience either maintains or creates a vision-dominated response. Once established, this response profile is not altered by long-standing blindness. Funding: David and Lucille Packard Foundation; National Center for Research Resources, Harvard-Thorndike General Clinical Research Center at Beth Israel Deaconess Medical Center (NCRR MO1 RR01032); Harvard Clinical and Translational Science Center (UL1 RR025758); National Institutes of Health (U.S.) (K24 RR018875); National Institutes of Health (U.S.) (RO1-EY12091).
Getting aligned on representational alignment
Biological and artificial information processing systems form representations
that they can use to categorize, reason, plan, navigate, and make decisions.
How can we measure the extent to which the representations formed by these
diverse systems agree? Do similarities in representations then translate into
similar behavior? How can a system's representations be modified to better
match those of another system? These questions pertaining to the study of
representational alignment are at the heart of some of the most active research
areas in cognitive science, neuroscience, and machine learning. For example,
cognitive scientists measure the representational alignment of multiple
individuals to identify shared cognitive priors, neuroscientists align fMRI
responses from multiple individuals into a shared representational space for
group-level analyses, and ML researchers distill knowledge from teacher models
into student models by increasing their alignment. Unfortunately, there is
limited knowledge transfer between research communities interested in
representational alignment, so progress in one field often ends up being
rediscovered independently in another. Thus, greater cross-field communication
would be advantageous. To improve communication between these fields, we
propose a unifying framework that can serve as a common language between
researchers studying representational alignment. We survey the literature from
all three fields and demonstrate how prior work fits into this framework.
Finally, we lay out open problems in representational alignment where progress
can benefit all three of these fields. We hope that our work can catalyze
cross-disciplinary collaboration and accelerate progress for all communities
studying and developing information processing systems. We note that this is a
working paper and encourage readers to reach out with their suggestions for
future revisions.
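One concrete measure of representational alignment covered by this literature is representational similarity analysis: build each system's representational dissimilarity matrix (RDM) over a shared stimulus set, then correlate the matrices' upper triangles. A minimal sketch, assuming responses are stimulus-by-feature arrays (the function names are ours, not the paper's framework):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli.
    `responses` has shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(responses)

def alignment(responses_a, responses_b):
    """Spearman correlation between the upper triangles of two RDMs;
    higher means the two systems represent the stimuli more similarly."""
    iu = np.triu_indices(responses_a.shape[0], k=1)
    a, b = rdm(responses_a)[iu], rdm(responses_b)[iu]
    ra = np.argsort(np.argsort(a))  # convert to ranks so the comparison
    rb = np.argsort(np.argsort(b))  # is invariant to monotone rescaling
    return np.corrcoef(ra, rb)[0, 1]
```

Two identical systems yield an alignment of 1.0; rank-correlating the RDMs is a common choice because it ignores monotone differences between the two systems' distance scales.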